Storage location assignment optimization of stereoscopic warehouse based on genetic simulated annealing algorithm
ZHU Jie, ZHANG Wenyi, XUE Fei
Journal of Computer Applications    2020, 40 (1): 284-291.   DOI: 10.11772/j.issn.1001-9081.2019061035
Concerning the storage location assignment problem in automated warehouses, a multi-objective model for automated stereoscopic warehouse storage location assignment was constructed by combining the operational characteristics and security requirements of the warehouse, and an adaptive improved Simulated Annealing Genetic Algorithm (SAGA) based on the Sigmoid curve was proposed to solve it. Firstly, a storage location optimization model was established with the objectives of reducing the loading and unloading time of items, reducing the distance between items in the same group, and lowering the shelf's center of gravity. Then, to overcome the poor local search ability of the Genetic Algorithm (GA) and its tendency to fall into local optima, adaptive crossover and mutation operations based on the Sigmoid curve as well as a reversal operation were introduced and fused into SAGA. Finally, the optimization ability, stability and convergence of the improved SAGA were tested. The experimental results show that, compared with the Simulated Annealing (SA) algorithm, the proposed algorithm improves the optimization degree of item loading and unloading time by 37.7949 percentage points, that of the distance between items in the same group by 58.4630 percentage points, and that of the shelf's center of gravity by 25.9275 percentage points, while exhibiting better stability and convergence. This proves the effectiveness of the improved SAGA for the problem, and the algorithm can serve as a decision method for automated warehouse storage location assignment optimization.
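As a rough illustration of the adaptive mechanism described above, the sketch below varies the crossover/mutation probability along a sigmoid of normalized fitness and couples it with the Metropolis acceptance step of simulated annealing; the parameter names, bounds and shape constant are illustrative assumptions, not values from the paper.

```python
import math
import random

def adaptive_prob(f, f_avg, f_max, p_min=0.5, p_max=0.9, k=10.0):
    """Sigmoid-shaped adaptive probability: below-average individuals are
    perturbed with the maximum probability, while above-average ones get a
    probability that decays along a sigmoid toward p_min. The shape
    constant k is an assumption."""
    if f < f_avg or f_max == f_avg:
        return p_max
    x = (f - f_avg) / (f_max - f_avg)                 # normalize to [0, 1]
    return p_min + (p_max - p_min) / (1 + math.exp(k * (2 * x - 1)))

def sa_accept(delta, temperature):
    """Metropolis criterion used by the simulated-annealing step: always
    accept improvements, accept worse solutions with probability
    exp(-delta / T)."""
    return delta <= 0 or random.random() < math.exp(-delta / temperature)
```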
Scheduling strategy of cloud robots based on parallel reinforcement learning
SHA Zongxuan, XUE Fei, ZHU Jie
Journal of Computer Applications    2019, 39 (2): 501-508.   DOI: 10.11772/j.issn.1001-9081.2018061406
To address the slow convergence of reinforcement learning tasks with large state spaces, a priority-based parallel reinforcement learning task scheduling strategy was proposed. Firstly, the convergence of Q-learning under the asynchronous parallel computing mode was proved. Secondly, complex problems were divided according to their state spaces, sub-problems were matched with computing nodes at the scheduling center, and each computing node completed the reinforcement learning task of its sub-problem and fed the result back to the center, realizing parallel reinforcement learning on a computer cluster. Finally, an experimental environment was built on CloudSim; parameters such as the optimal step length, discount rate and sub-problem size were determined, and the performance of the proposed strategy with different numbers of computing nodes was verified on practical problems. With 64 computing nodes, the efficiency of the proposed strategy was improved by 61% and 86% compared with round-robin scheduling and random scheduling respectively. The experimental results show that the proposed scheduling strategy effectively speeds up convergence under parallel computing, taking about 1.6×10^5 s to obtain the optimal policy for a control problem with a state space of 1 million.
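A minimal sketch of the two ingredients named above: the tabular Q-learning update that each worker applies to its slice of the state space, and a priority-based matching of sub-problems to nodes at the scheduling center. The orderings used for matching (largest sub-problem to fastest idle node) are assumptions, since the abstract does not spell out the priority rule.

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step; asynchronous workers may apply this
    to a shared table for their own sub-problem's states."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

def assign_subproblems(subproblems, nodes):
    """Priority-based matching at the scheduling center: hand the largest
    pending sub-problem (by state-space size) to the fastest idle node.
    Both orderings are illustrative assumptions."""
    pending = sorted(subproblems, key=lambda p: p["n_states"], reverse=True)
    idle = sorted(nodes, key=lambda n: n["speed"], reverse=True)
    return list(zip(idle, pending))   # (node, sub-problem) pairs
```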
Measure method and properties of weighted hypernetwork
LIU Shengjiu, LI Tianrui, YANG Zonglin, ZHU Jie
Journal of Computer Applications    2019, 39 (11): 3107-3113.   DOI: 10.11772/j.issn.1001-9081.2019050806
A hypernetwork is a kind of network more complex than an ordinary complex network. Since each hyperedge can connect any number of nodes, a hypernetwork can describe complex systems in the real world more faithfully than a complex network. To address the shortcomings and deficiencies of existing hypernetwork measures, a new measure, the Hypernetwork Dimension (HD), was proposed. The hypernetwork dimension was defined as twice the ratio of the logarithm of the sum, over all hyperedges, of the product of each hyperedge's weight and the total weight of its nodes, to the logarithm of the product of the sum of hyperedge weights and the sum of node weights. The hypernetwork dimension is applicable to weighted hypernetworks whose node and hyperedge weights take many different numerical types, such as positive real numbers, negative real numbers, pure imaginary numbers, and even complex numbers. Finally, several important properties of the proposed hypernetwork dimension were discussed.
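The verbal definition admits a compact formula. One plausible LaTeX reading, assuming w(v) denotes the weight of node v, W(e) the weight of hyperedge e, V the node set and E the hyperedge set (the notation is not given in the abstract):

```latex
\mathrm{HD} \;=\;
\frac{2\,\log\!\Bigl(\sum_{e\in E} W(e)\sum_{v\in e} w(v)\Bigr)}
     {\log\!\Bigl(\sum_{e\in E} W(e)\,\cdot\,\sum_{v\in V} w(v)\Bigr)}
```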
Multi-center convolutional feature weighting based image retrieval
ZHU Jie, ZHANG Junsan, WU Shufang, DONG Yukun, LYU Lin
Journal of Computer Applications    2018, 38 (10): 2778-2781.   DOI: 10.11772/j.issn.1001-9081.2018041100
Deep convolutional features can provide rich semantic information for describing image content. To highlight the object content in the image representation, a multi-center convolutional feature weighting method was proposed based on the relationship between high-response positions and object regions. Firstly, a pre-trained deep network model was used to extract deep convolutional features. Secondly, an activation map was obtained by summing the feature maps over all channels, and the positions with the few highest responses were taken as object centers. Thirdly, with the number of centers regarded as the scale, the descriptors at different positions were weighted according to their distances to these centers. Finally, the image representation for retrieval was generated by merging the image features obtained with different numbers of centers. Compared with the Sum-pooled Convolutional (SPoC) algorithm and the Cross-dimensional (CroW) algorithm, the proposed method provides scale information and highlights the object content in the image representation, and achieves excellent retrieval results on the Holiday, Oxford and Paris image retrieval datasets.
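A hedged sketch of one scale of the weighting step for a single (C, H, W) convolutional feature map: channel-wise summation gives the activation map, the top-k responses serve as centers, and each position's descriptor is weighted by its distance to the nearest center before sum-pooling. The exact weighting function is an assumption; the abstract only states that weights are distance-based.

```python
import numpy as np

def multi_center_weighting(feat, k=3):
    """feat: (C, H, W) convolutional feature map. Returns a C-dim
    descriptor obtained by distance-weighted sum-pooling around the
    k highest-response positions."""
    C, H, W = feat.shape
    act = feat.sum(axis=0)                            # (H, W) activation map
    idx = np.argsort(act.ravel())[-k:]                # top-k responses
    centers = np.column_stack(np.unravel_index(idx, (H, W)))
    ys, xs = np.mgrid[0:H, 0:W]
    pos = np.stack([ys, xs], axis=-1).reshape(-1, 2)  # all spatial positions
    d = np.linalg.norm(pos[:, None, :] - centers[None, :, :], axis=2)
    weights = np.exp(-d.min(axis=1) / np.sqrt(H * W)) # nearer => larger weight
    return (feat.reshape(C, -1) * weights).sum(axis=1)
```

Representations for different k (different scales) would then be merged, e.g. by concatenation, to form the final retrieval feature.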
Bayesian clustering algorithm for categorical data
ZHU Jie, CHEN Lifei
Journal of Computer Applications    2017, 37 (4): 1026-1031.   DOI: 10.11772/j.issn.1001-9081.2017.04.1026
To address the difficulty of defining a meaningful distance measure for categorical data clustering, a new categorical data clustering algorithm based on Bayesian probability estimation was proposed. Firstly, a probability model with automatic attribute weighting was proposed, in which each categorical attribute was assigned an individual weight indicating its importance for clustering. Secondly, a clustering objective function was derived using maximum likelihood estimation and a Bayesian transformation, and a partitioning algorithm was proposed to optimize this objective, grouping data according to the weighted likelihood between objects and clusters instead of pairwise distances. Thirdly, an expression for estimating the attribute weights was derived, indicating that an attribute's weight should be inversely proportional to the entropy of its category distribution. Experiments were conducted on several real datasets and a synthetic dataset. The results show that the proposed algorithm yields higher clustering accuracy than the existing distance-based algorithms, achieving 5%-48% improvement on the Bioinformatics data together with meaningful attribute weights for the categorical attributes.
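The weighting rule stated above is concrete enough for a small sketch: each attribute's weight is taken inversely proportional to the entropy of its category distribution, so more concentrated attributes count more. Normalizing the weights to sum to 1 is an illustrative choice, not a detail from the abstract.

```python
import math
from collections import Counter

def attribute_weights(data):
    """data: list of equal-length tuples of categorical values.
    Returns one weight per attribute, inversely proportional to the
    entropy of that attribute's category distribution."""
    n_attrs = len(data[0])
    raw = []
    for j in range(n_attrs):
        counts = Counter(row[j] for row in data)
        total = sum(counts.values())
        h = -sum((c / total) * math.log(c / total) for c in counts.values())
        raw.append(1.0 / (h + 1e-9))          # inverse entropy
    s = sum(raw)
    return [w / s for w in raw]               # normalized to sum to 1
```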
Color based compact hierarchical image representation
ZHU Jie, WU Shufang, XIE Bojun, MA Liyan
Journal of Computer Applications    2017, 37 (11): 3238-3243.   DOI: 10.11772/j.issn.1001-9081.2017.11.3238
The spatial pyramid matching method provides spatial information by splitting an image into different cells; however, it cannot match different parts of objects well. A hierarchical image representation method based on Color Level (CL) was proposed. In the CL algorithm, class-specific discriminative colors at different levels were obtained from the viewpoint of feature fusion, and an image was then iteratively split into different levels based on these discriminative colors. Finally, the image representation was constructed by concatenating the histograms of the different levels. To reduce the dimensionality of the representation, the Divisive Information-Theoretic feature Clustering (DITC) method was used to cluster the dictionary, and the generated compact dictionary was used for the final image representation. Classification results on the Soccer, Flower 17 and Flower 102 datasets demonstrate that the proposed method obtains satisfactory results on these datasets.
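A speculative sketch of the level-splitting step, assuming per-pixel color indices and a precomputed set of discriminative colors per level (how those colors are learned via feature fusion is omitted here): each level masks off its discriminative colors, a normalized histogram is computed inside the mask, and the histograms are concatenated.

```python
import numpy as np

def color_level_representation(color_labels, level_colors):
    """color_labels: (H, W) array of per-pixel color indices.
    level_colors: list of sets, the discriminative colors per level.
    Returns the concatenation of one normalized histogram per level."""
    n_colors = int(color_labels.max()) + 1
    remaining = np.ones_like(color_labels, dtype=bool)
    hists = []
    for colors in level_colors:
        mask = remaining & np.isin(color_labels, list(colors))
        h = np.bincount(color_labels[mask], minlength=n_colors).astype(float)
        hists.append(h / max(h.sum(), 1.0))   # normalized level histogram
        remaining &= ~mask                    # pass the rest to deeper levels
    return np.concatenate(hists)
```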
Hadoop adaptive task scheduling algorithm based on computation capacity difference between node sets
ZHU Jie, LI Wenrui, WANG Jiangping, ZHAO Hong
Journal of Computer Applications    2016, 36 (4): 918-922.   DOI: 10.11772/j.issn.1001-9081.2016.04.0918
Aiming at the fixed task progress proportions and the passive selection of slow tasks in the speculative execution algorithm for heterogeneous clusters, an adaptive task scheduling algorithm based on the computation capacity difference between node sets was proposed. The computation capacity difference between node sets was quantified so that tasks could be scheduled over fast and slow node sets, and dynamic feedback on node and task speeds was used to update the slow node set in time, improving resource utilization and task parallelism. Within the two node sets, the task progress proportions were adjusted dynamically to improve the accuracy of slow-task identification, and a fast node was selected to dynamically execute backup tasks for slow tasks through a substitute execution mechanism, improving task execution efficiency. The experimental results show that, compared with the Longest Approximate Time to End (LATE) algorithm, the proposed algorithm reduced the running time by 5.21%, 20.51% and 23.86% on a short job set, a mixed-type job set and a mixed-type job set with node performance degradation respectively, and significantly reduced the number of initiated backup tasks. The proposed algorithm adapts tasks to node differences and effectively improves overall job execution efficiency while reducing slow backup tasks.
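A hedged sketch of the two decisions described above: splitting nodes into fast and slow sets by measured speed, and flagging slow tasks relative to the average rate within the task's own node set rather than a fixed cluster-wide proportion. The average-speed threshold and the 0.8 factor are illustrative assumptions.

```python
def split_node_sets(node_speeds, ratio=1.0):
    """node_speeds: {node: measured task speed}. Nodes at or above the
    (scaled) cluster average go to the fast set; the rest form the slow
    set, re-evaluated as fresh speed feedback arrives."""
    avg = sum(node_speeds.values()) / len(node_speeds)
    fast = {n for n, s in node_speeds.items() if s >= ratio * avg}
    slow = set(node_speeds) - fast
    return fast, slow

def is_slow_task(progress, elapsed, set_avg_rate, threshold=0.8):
    """Flag a task as slow when its progress rate falls below a fraction
    of the average rate within its own node set, so the proportion
    adapts to the set instead of being fixed."""
    rate = progress / max(elapsed, 1e-9)
    return rate < threshold * set_avg_rate
```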
Resource matching maximum set job scheduling algorithm under Hadoop
ZHU Jie, LI Wenrui, ZHAO Hong, LI Ying
Journal of Computer Applications    2015, 35 (12): 3383-3386.   DOI: 10.11772/j.issn.1001-9081.2015.12.3383
Concerning the inefficient execution of jobs with a high proportion of resource demand in job scheduling algorithms for the present hierarchical queue structure, a resource matching maximum set algorithm was proposed. The proposed algorithm analyzed job characteristics and introduced the completion percentage, waiting time, priority and rescheduling count as urgency factors. Jobs with a high proportion of resource demand or a long waiting time were considered preferentially to improve fairness among jobs. Under a limited amount of available resources, double queues were applied to preferentially select jobs with high urgency values, and the maximum job set was selected from job sets with different resource proportions to achieve scheduling balance. Compared with the Max-min fairness algorithm, the proposed algorithm decreases the average waiting time and improves resource utilization. The experimental results show that, with the proposed algorithm, the running time of a same-type job set consisting of jobs with different resource proportions was reduced by 18.73%, and the running time of jobs with a high resource proportion was reduced by 27.26%; for a mixed-type job set the corresponding reductions were 22.36% and 30.28%. The results indicate that the proposed algorithm can effectively reduce the waiting time of jobs with a high proportion of resource demand and improve overall job execution efficiency.
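A minimal sketch combining the four urgency factors named above into one score and greedily packing the highest-urgency jobs under a resource cap as a stand-in for the maximum-set step. The linear form, the weights and the field names are illustrative assumptions, not the paper's formulation.

```python
def urgency(job, w=(0.3, 0.3, 0.2, 0.2)):
    """Combine completion percentage, waiting time, priority and
    rescheduling count into a single urgency score in roughly [0, 1]."""
    return (w[0] * job["completion"]                    # fraction completed
          + w[1] * job["wait_time"] / job["max_wait"]   # normalized waiting
          + w[2] * job["priority"]                      # assumed in [0, 1]
          + w[3] * min(job["reschedules"] / 5.0, 1.0))

def max_job_set(jobs, capacity):
    """Greedy stand-in for the 'maximum set' selection: take jobs in
    descending urgency while their summed resource demand fits the
    available capacity."""
    chosen, used = [], 0
    for job in sorted(jobs, key=urgency, reverse=True):
        if used + job["demand"] <= capacity:
            chosen.append(job)
            used += job["demand"]
    return chosen
```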
Feature selection method based on integration of mutual information and fuzzy C-means clustering
ZHU Jiewen, XIAO Jun
Journal of Computer Applications    2014, 34 (9): 2608-2611.  
Plenty of redundant features may reduce classification performance on massive datasets. To resolve this problem, a new automatic feature selection method based on the integration of Mutual Information (MI) and Fuzzy C-Means (FCM) clustering, named FCC-MI, was proposed. Firstly, MI and its correlation function were analyzed, and the features were sorted by correlation value. Secondly, the data was grouped according to the feature with the maximum correlation, and the optimal number of features was determined automatically by the FCM clustering method. Finally, the selected features were optimized using the correlation values. Experiments on seven datasets from the UCI machine learning repository compared FCC-MI with three methods from the literature: WCMFS (Within-class variance and Correlation Measure Feature Selection), B-AMBDMI (Based on Approximating Markov Blanket and Dynamic Mutual Information), and T-MI-GA (Two-stage feature selection based on MI and GA). The theoretical analysis and experimental results show that the proposed method not only improves the efficiency of data classification while ensuring classification accuracy, but also determines the optimal feature subset automatically, reducing the number of features of the dataset; it is therefore suitable for feature reduction and analysis of massive data with many correlated features.
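A speculative sketch of the FCC-MI idea under strong assumptions: rank features by mutual information with the class, then run a tiny two-cluster fuzzy C-means on the 1-D MI scores and keep the cluster with the higher center, so the number of selected features falls out of the clustering rather than a manual threshold. The two-cluster choice and the membership cutoff are assumptions.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def fcc_mi_select(X, y, m=2.0, n_iter=100):
    """Return indices of features whose MI score lands in the high-MI
    fuzzy cluster. X: (n_samples, n_features), y: class labels."""
    mi = mutual_info_classif(X, y)
    c = np.array([mi.min(), mi.max()])             # initial cluster centers
    for _ in range(n_iter):                        # plain FCM on 1-D scores
        d = np.abs(mi[:, None] - c[None, :]) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)          # fuzzy memberships
        c = (u ** m * mi[:, None]).sum(axis=0) / (u ** m).sum(axis=0)
    keep = u[:, np.argmax(c)] > 0.5                # high-MI cluster
    return np.flatnonzero(keep)
```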
Three-queue job scheduling algorithm based on Hadoop
ZHU Jie, ZHAO Hong, LI Wenrui
Journal of Computer Applications    2014, 34 (11): 3227-3230.   DOI: 10.11772/j.issn.1001-9081.2014.11.3227
The single-queue job scheduling algorithm in a homogeneous Hadoop cluster causes short jobs to wait and resources to be under-utilized; multi-queue scheduling algorithms solve the problems of unfairness and low execution efficiency, but most of them need manually set parameters, let queues compete with each other for resources, and are more complex. To resolve these problems, a three-queue scheduling algorithm was proposed. The algorithm used job classification, dynamic priority adjustment, a shared resource pool and job preemption to realize fairness, simplify the scheduling flow of normal jobs and improve concurrency. Comparison experiments with the First In First Out (FIFO) algorithm were conducted under three situations: a high percentage of short jobs, similar percentages of all types of jobs, and mostly normal jobs with occasional long and short jobs. The proposed algorithm reduced the running time of jobs. The experimental results show that the gain in execution efficiency is not obvious when most jobs are short; however, when the job types are balanced, the improvement is remarkable. This is consistent with the design rules of the algorithm: prioritize short jobs, simplify the scheduling flow of normal jobs and take long jobs into account, which improves the scheduling performance.
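A hedged sketch of two of the mechanisms named above: routing jobs into the three queues by estimated run time, and raising a job's priority with its waiting time so that the preference for short jobs does not starve the others. The thresholds and step sizes are illustrative assumptions.

```python
def classify(job, short_max=60, long_min=3600):
    """Route a job into one of the three queues by estimated run time
    (seconds); the cut-offs are illustrative, the abstract gives none."""
    if job["est_runtime"] <= short_max:
        return "short"
    return "long" if job["est_runtime"] >= long_min else "normal"

def age_priority(job, now, step=300, max_prio=10):
    """Dynamic priority adjustment: priority rises by one level for
    every `step` seconds of waiting, capped at max_prio, so long-waiting
    normal/long jobs are eventually scheduled."""
    waited = now - job["submit_time"]
    return min(job["base_prio"] + waited // step, max_prio)
```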